Search Results: "Daniel Silverstone"

18 July 2017

Daniel Silverstone: Yay, finished my degree at last

A little while back, in June, I sat my last exam for what I hoped would be the last module in my degree. For seven years, I've been working on a degree with the Open University and have been taking advantage of the opportunity to have a somewhat self-directed course load by taking the 'Open' degree track. When asked why I bothered to do this, I guess my answer has been a little varied. In principle it's because I felt like I'd already done a year's worth of degree and didn't want it wasted, but it's also because I have been, in the dim and distant past, overlooked for jobs simply because I had no degree and thus was an easy "bin the CV". Fed up with this, I decided to commit to the Open University and thus began my journey toward 'qualification' in 2010.

I started by transferring the level 1 credits from my stint at UCL back in 1998/1999, which were in a combination of basic programming in Java, some mathematics including things like RSA, and some psychology and AI courses which at the time were aiming at a degree called 'Computer Science with Cognitive Sciences'. Then I took level 2 courses: M263 (Building blocks of software), TA212 (The technology of music) and MS221 (Exploring mathematics). I really enjoyed the mathematics course and so... At level 3 I took MT365 (Graphs, networks and design), M362 (Developing concurrent distributed systems) and TM351 (Data management and analysis - which I ended up hating), finally finishing this June with TM355 (Communications technology).

I received an email this evening telling me the module result for TM355 had been posted, and I logged in to find I had done well enough to be offered my degree. I could have claimed my degree 18+ months ago, but I persevered through another two courses in order to qualify for an honours degree, which I have now been awarded. Since I don't particularly fancy any ceremonial awarding, I just went through the clicky clicky and accepted my qualification of 'Bachelor of Science (Honours) Open, Upper Second-class Honours (2.1)', which grants me the letters 'BSc (Hons) Open (Open)' which, knowing me, will likely never even make it onto my CV because I'm too lazy.

It has been a significant effort, over the course of the past few years, to complete a degree without giving up too much of my personal commitments. In addition to earning the degree, I have worked, for six of the seven years it has taken, for Codethink doing interesting work in and around Linux systems and Trustable software. I have designed and built Git server software which is in use in some universities and many companies, along with a good few of my F/LOSS colleagues. And I've still managed to find time to attend plays, watch films, read an average of 2 novel-length stories a week (some of which were even real books), and be a member of the Manchester Hackspace.

Right now, I'm looking forward to a stress-free couple of weeks, followed by an immense amount of fun at Debconf17 in Montréal!

8 July 2017

Daniel Silverstone: Gitano - Approaching Release - Access Control Changes

As mentioned previously I am working toward getting Gitano into Stretch. A colleague and friend of mine (Richard Maw) did a large pile of work on Lace to support what we are calling sub-defines. These let us simplify Gitano's ACL files, particularly for individual projects. In this posting, I'd like to cover what has changed with the access control support in Gitano, so if you've never used it then some of this may make little sense. Later on, I'll be looking at some better user documentation in conjunction with another friend of mine (Lars Wirzenius) who has promised to help produce a basic administration manual before Stretch is totally frozen.

Sub-defines

With a more modern lace (version 1.3 or later) there is a mechanism we are calling 'sub-defines'. Previously, if you wanted to write a ruleset which said something like "Allow Steve to read my repository" you needed:
define is_steve user exact steve
allow "Steve can read my repo" is_steve op_read
And, as you'd expect, if you also wanted to grant read access to Jeff then you'd need yet another set of defines:
define is_jeff user exact jeff
define is_steve user exact steve
define readers anyof is_jeff is_steve
allow "Steve and Jeff can read my repo" readers op_read
This, while flexible (and still entirely acceptable), is wordy for small rulesets, and so we added sub-defines to create this syntax:
allow "Steve and Jeff can read my repo" op_read [anyof [user exact jeff] [user exact steve]]
Of course, this is generally neater for simpler rules; if you wanted to add another user then it might make sense to go for:
define readers anyof [user exact jeff] [user exact steve] [user exact susan]
allow "My friends can read my repo" op_read readers
The nice thing about this sub-define syntax is that it's usable essentially anywhere you'd use the name of a previously defined thing; they're compiled in much the same way, and Richard worked hard to get good error messages out of them, just in case.

No more auto_user_XXX and auto_group_YYY

As a result of the above being implemented, the support Gitano previously grew for automatically defining users and groups has been removed. The approach we took was pretty inflexible and risked compilation errors if a user was deleted or renamed, so the sub-define approach is much, much better. If you currently use auto_user_XXX or auto_group_YYY in your rulesets then your upgrade path isn't bumpless, but it should be fairly simple:
  1. Upgrade your version of lace to 1.3
  2. Replace any auto_user_FOO with [user exact FOO], and similarly replace any auto_group_BAR with [group exact BAR] (see the example after these steps).
  3. You can now upgrade Gitano safely.
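As an illustration (using a made-up user steve and group admins), a ruleset fragment which previously leaned on the automatic defines, such as:
define readers anyof auto_user_steve auto_group_admins
allow "Steve and the admins can read" readers op_read
would, after step 2, become:
define readers anyof [user exact steve] [group exact admins]
allow "Steve and the admins can read" readers op_read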

No more 'basic' matches

Since Gitano first gained support for ACLs using Lace, we had a mechanism called 'simple match' for basic inputs such as groups, usernames, repo names, ref names, etc. Simple matches looked like user FOO or group !BAR. The match syntax grew more and more arcane as we added Lua pattern support (refs ~^refs/heads/$, user /.), and when we wanted to add proper PCRE regex support we added a syntax of the form user pcre ^/.+?..., where the 'how' (pcre in that example) could be any of: exact, prefix, suffix, pattern, or pcre. We had a complex set of rules for exactly what the sigils at the start of the match string might mean in what order, and it was getting unwieldy.

To simplify matters, none of the "backward compatibility" remains in Gitano. You instead MUST use the explicit what how match form. To make this slightly more natural to use, we have added a bunch of aliases: is for exact, starts and startswith for prefix, and ends and endswith for suffix. In addition, any kind of match can be prefixed with a ! to invert it, and for natural-looking rules not is an alias for !is.

This means that your rulesets MUST be updated to use the more explicit syntax before you update Gitano, or else nothing will compile. Fortunately this form has been supported for a long time, so you can do this in three steps (some examples of the new syntax follow the steps):
  1. Update your gitano-admin.git global ruleset. For example, the old form of the defines used to contain define is_gitano_ref ref ~^refs/gitano/, which can trivially be replaced with: define is_gitano_ref ref prefix refs/gitano/
  2. Update any non-zero rulesets your projects might have.
  3. You can now safely update Gitano.
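To give a flavour of the new form, here are a few illustrative defines (the names and patterns are made up) using the explicit syntax and its aliases:
define is_steve user is steve
define not_steve user not steve
define release_tags ref startswith refs/tags/release-
define debian_refs ref endswith /debian
Each alias is exactly equivalent to its exact/prefix/suffix counterpart; they exist purely to make rulesets read more naturally.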
If you want a reference for making those changes, you can look at the Gitano skeleton ruleset which can be found at https://git.gitano.org.uk/gitano.git/tree/skel/gitano-admin/rules/ or in /usr/share/gitano if Gitano is installed on your local system. Next time, I'll likely talk about the deprecated commands which are no longer in Gitano, and how you'll need to adjust your automation to use the new commands.

1 July 2017

Daniel Silverstone: F/LOSS activity, June 2017

It seems to be becoming popular to send a mail each month detailing your free software work for that month. I have been slowly ramping my F/LOSS activity back up, after years away where I worked on completing my degree. My final exam for that was in June 2017 and as such I am now in a position to try and get on with more F/LOSS work.

My focus, as you might expect, has been on Gitano, which reached 1.0 in time for Stretch's release and which is now heading gently toward a 1.1 release which we have timed for Debconf 2017. My friend and colleague Richard has been working hard on Gitano and related components during this time too, and I hope that Debconf will be an opportunity for him to meet many of my Debian friends too. But enough of that, back to the F/LOSS.

We've been running Gitano developer days roughly monthly since March of 2017, and the June developer day was attended by myself, Richard Maw, and Richard Ipsum. You are invited to read the wiki page for the developer day if you want to know exactly what we got up to, but a summary of my involvement that day is:

Other than that, related to Gitano during June I:

My non-Gitano F/LOSS related work in June has been entirely centred around the support I provide to the Lua community in the form of the Lua mailing list and website. The host on which it's run is ailing, and I've had to spend time trying to improve and replace that. Hopefully I'll have more to say next month.

Perhaps by doing this reporting I'll get more F/LOSS done. Of course, July's report will be sent out while I'm in Montréal for Debconf 2017 (or at least for debcamp at that point) so hopefully more to say anyway.

5 May 2017

Daniel Silverstone: Yarn architecture discussion

Recently Rob and I visited Soile and Lars. We had a lovely time wandering around Helsinki with them, and I also spent a good chunk of time with Lars working on some design and planning for the Yarn test specification and tooling. You see, I wrote a Rust implementation of Yarn called rsyarn "for fun" and in doing so I noted a bunch of missing bits in the understanding Lars and I shared about how Yarn should work. Lars and I filled, and re-filled, a whiteboard with discussion about what the 'Yarn specification' should be, about various language extensions and changes, and also about what functionality a normative implementation of Yarn should have. This article is meant to be a write-up of all of that discussion, but before I start on that, I should probably summarise what Yarn is.
Yarn is a mechanism for specifying tests in a form which is more like documentation than code. Yarn follows the concept of BDD story-based design/testing and has a very Cucumberish scenario language in which to write tests. Yarn takes, as input, Markdown documents which contain code blocks with Yarn tests in them; it then runs those tests and reports on the scenario failures/successes. As an example of a poorly written but still fairly effective Yarn suite, you could look at Gitano's tests or perhaps at Obnam's tests (rendered as HTML). Yarn is not trying to replace unit testing, nor other forms of testing, but rather seeks to be one of a suite of test tools used to help validate software and to verify integrations. Lars writes Yarns which test his server setups, for example. As an example, let's look at what a simple test might be for the behaviour of the /bin/true tool:
SCENARIO true should exit with code zero
WHEN /bin/true is run with no arguments
THEN the exit code is 0
 AND stdout is empty
 AND stderr is empty
Anyone ought to be able to understand exactly what that test is doing, even though there's no obvious code to run. Yarn statements are meant to be easily grokked by both developers and managers. This should be so that managers can understand the tests which verify that requirements are being met, without needing to grok python, shell, C, or whatever else is needed to implement the test where the Yarns meet the metal. Obviously, there needs to be a way to join the dots, and Yarn calls those things IMPLEMENTS, for example:
IMPLEMENTS WHEN (\S+) is run with no arguments
set +e
"$ MATCH_1 " > "$ DATADIR /stdout" 2> "$ DATADIR /stderr"
echo $? > "$ DATADIR /exitcode"
As you can see from the example, Yarn IMPLEMENTS can use regular expressions to capture parts of their invocation, allowing the test implementer to handle many different scenario statements with one implementation block. For the rest of the implementation, whatever you assume about things will probably be okay for now.
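For illustration, here is a minimal sketch of how the THEN statements from the scenario above might meet the metal, reusing the same ${DATADIR} and ${MATCH_n} conventions (this is my own sketch, not the canonical implementation):
IMPLEMENTS THEN the exit code is (\d+)
test "$(cat "${DATADIR}/exitcode")" -eq "${MATCH_1}"

IMPLEMENTS THEN (stdout|stderr) is empty
test ! -s "${DATADIR}/${MATCH_1}"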
Given all of the above, we (Lars and I) decided that it would make a lot of sense if there was a set of Yarn scenarios which could validate a Yarn implementation. Such a document could also form the basis of a Yarn specification and also a manual for writing reasonable Yarn scenarios. As such, we wrote up a three-column approach to what we'd need in that test suite.

Firstly we considered what the core features of the Yarn language are:

We considered unusual (or corner) cases and which of them needed defining in the short to medium term:

All of this comes down to how to interpret input to a Yarn implementation. In addition there were a number of things we felt any "normative" Yarn implementation would have to handle or provide in order to be considered useful. It's worth noting that we don't specify anything about an implementation being a command line tool though...

There's bound to be more, but right now with the above, we believe we have two roughly conformant Yarn implementations: Lars' Python-based implementation which lives in cmdtest (and which I shall refer to as pyyarn for now) and my Rust-based one (rsyarn).
One thing which rsyarn supports, but pyyarn does not, is running multiple scenarios in parallel. However when I wrote that support into rsyarn I noticed that there were plenty of issues with running stuff in parallel (a problem I'm sure any of you who know about threads will appreciate). One particular issue was that scenarios often need to share resources which are not easily sandboxed into the ${DATADIR} provided by Yarn, for example databases or access to limited online services. Lars and I had a good chat about that, and decided that a reasonable language extension could be:
USING database foo
with its counterpart
RESOURCE database (\S+)
LABEL database-$1
GIVEN a database called $1
FINALLY database $1 is torn down
The USING statement should be reasonably clear in its pairing to a RESOURCE statement. The LABEL statement I'll get to in a moment (though it's only relevant in a RESOURCE block), and the rest of the statements are essentially substituted into the calling scenario at the point of the USING. This is nowhere near ready to consider adding to the specification though. Both Lars and I are uncomfortable with the $1 syntax, though we can't think of anything nicer right now; and the USING/RESOURCE/LABEL vocabulary isn't set in stone either.

The idea of the LABEL is that we'd also require that a normative Yarn implementation be capable of specifying resource limits by name. E.g. if a RESOURCE used a LABEL foo then the caller of a Yarn scenario suite could specify that there were 5 foos available. The Yarn implementation would then schedule a maximum of 5 scenarios which are using that label to happen simultaneously. At bare minimum it'd gate new users, but at best it would intelligently schedule them. In addition, since this introduces the concept of parallelism into Yarn proper, we also wanted to add a maximum parallelism setting to the Yarn implementation requirements, and to specify that any resource label which was not explicitly set had a usage limit of 1.
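To make the gating concrete, consider two hypothetical scenarios (the statements here are invented for the example) which both use the RESOURCE above:
SCENARIO bulk import works
USING database imports
WHEN the importer is run against imports
THEN the import succeeds

SCENARIO migration works
USING database imports
WHEN the migrator is run against imports
THEN the migration succeeds
Both scenarios would acquire the database-imports label, so with the default usage limit of 1 they would never run concurrently; if the caller declared five such databases available, up to five labelled scenarios could proceed in parallel.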
Once we'd discussed the parallelism, we decided that once we had a nice syntax for expanding these sets of statements anyway, we may as well have a syntax for specifying scenario language expansions which could be used to provide something akin to macros for Yarn scenarios. What we came up with as a starter-for-ten was:
CALLING write foo
paired with
EXPANDING write (\S+)
GIVEN bar
WHEN $1 is written to
THEN success was had by all
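So, under this proposal, a scenario containing CALLING write foo would behave as though its author had written the expansion with $1 replaced by the captured word foo, i.e.:
GIVEN bar
WHEN foo is written to
THEN success was had by all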
Again, the CALLING/EXPANDING keywords are not fixed yet, nor is the $1 type syntax, though whatever is used here should match the other places where we might want it.
Finally we discussed multi-line inputs in Yarn. We currently have a syntax akin to:
GIVEN foo
... bar
... baz
which is directly equivalent to:
GIVEN foo bar baz
and this is achieved by collapsing the multiple lines and using the whitespace normalisation functionality of Yarn to replace all whitespace sequences with single space characters. However this means that, for example, injecting chunks of YAML into a Yarn scenario is a pain, as would be including any amount of another whitespace-sensitive input language. After a lot of to-ing and fro-ing, we decided that the right thing to do would be to redefine the ... Yarn statement to be whitespace preserving and to then pass that whitespace through to be matched by the IMPLEMENTS or whatever. In order for that to work, the regexp matching would have to be defined to treat the input as a single line, allowing . to match \n etc. Of course, this would mean that the old functionality wouldn't be possible, so we considered allowing a \ at the end of a line to provide the current kind of behaviour, rewriting the above example as:
GIVEN foo \
bar \
baz
It's not as nice, but since we couldn't find any real uses of ... in any of our Yarn suites where having the whitespace preserved would be an issue, we decided it was worth the pain.
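Under the proposed whitespace-preserving semantics, a (made-up) example like the following would hand the YAML to the matching IMPLEMENTS with its indentation intact:
GIVEN a configuration file containing
... users:
...   - name: steve
...     admin: true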
None of the above is, as of yet, set in stone. This blog posting is about me recording the information so that it can be referred to; and also to hopefully spark a little bit of discussion about Yarn. We'd welcome emails to our usual addresses, being poked on Twitter, or on IRC in the common spots we can be found. If you're honestly unsure of how to get hold of us, just comment on this blog post and I'll find your message eventually. Hopefully soon we can start writing that Yarn suite which can be used to validate the behaviour of pyyarn and rsyarn and from there we can implement our new proposals for extending Yarn to be even more useful.

13 November 2016

Andrew Cater: MiniDebconf Cambridge, ARM, 13/11/16 - Day 4 post 3

Just about to watch Daniel Silverstone demonstrating his new FPGA board.

Mystorm - open hardware - growing out of Icestorm. Programmable hardware - toolchains - and showing the hardware.

The Debian video team have advised that they are about to start the transcoding process - videos will be available soonish :)

http://video.debconf.org/minidebconf-low.webm


24 October 2016

Daniel Silverstone: Gitano - Approaching Release - Deprecated commands

As mentioned previously I am working toward getting Gitano into Stretch. Last time we spoke about lace, on which a colleague and friend of mine (Richard Maw) did a large pile of work. This time I'm going to discuss deprecation approaches and building more capability out of fewer features.

First, a little background -- Gitano is written in Lua, which is a deliberately small language whose authors spend more time thinking about what they can remove from the language spec than about what they could add in. I first came to Lua in the 3.2 days, a little before 4.0 came out. (The authors provide a lovely timeline in case you're interested.) With each of the releases of Lua which came after 3.2, I was struck with how the authors looked to take a number of features which the language had, and collapse them into more generic, more powerful, smaller, fewer features. This approach to design stuck with me over the subsequent decade, and when I began Gitano I tried to have the smallest number of core features/behaviours from which could grow the power and complexity I desired.

Gitano is, at its core, a set of files in a single format (clod) stored in a consistent manner (Git) which mediate access to a resource (Git repositories). Some of those files result in emergent properties such as the concept of the 'owner' of a repository (though that can simply be considered the value of the project.owner property for the repository). Indeed the concept of the owner of a repository is a fiction generated by the ACL system with a very small amount of collusion from the core of Gitano. Yet until recently Gitano had a first-class command set-owner which would alter that one configuration value.
[gitano]  set-description ---- Set the repo's short description (Takes a repo)
[gitano]         set-head ---- Set the repo's HEAD symbolic reference (Takes a repo)
[gitano]        set-owner ---- Sets the owner of a repository (Takes a repo)
Those of you with Gitano installations may see the above if you ask it for help. Yet you'll also likely see:
[gitano]           config ---- View and change configuration for a repository (Takes a repo)
The config command gives you access to the repository configuration file (which, yes, you could access over git instead, but the config command can be delegated in a more fine-grained fashion without having to write hooks). Given the config command has all the functionality of the three specific set-* commands shown above, it was time to remove the specific commands.

Migrating

If you had automation which used the set-description, set-head, or set-owner commands then you will want to switch to the config command before you migrate your server to the current or any future version of Gitano. In brief, where you had:
ssh git@gitserver set-FOO repo something
You now need:
ssh git@gitserver config repo set project.FOO something
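So, to take the owner example from earlier (with a made-up repository name and owner), where your automation previously ran:
ssh git@gitserver set-owner myrepo steve
it should now run:
ssh git@gitserver config myrepo set project.owner steve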
It looks a little more wordy but it is consistent with the other features that are keyed from the project configuration, such as:
ssh git@gitserver config repo set cgitrc.section Fooble Section Name
And, of course, you can see what configuration is present with:
ssh git@gitserver config repo show
Or look at a specific value with:
ssh git@gitserver config repo show specific.key
As always, you can get more detailed (if somewhat cryptic) help with:
ssh git@gitserver help config
Next time I'll try and touch on the new PGP/GPG integration support.

23 October 2016

Vincent Sanders: Rabbit of Caerbannog

Subsequent to my previous use of American Fuzzy Lop (AFL) on the NetSurf bitmap image library, I applied it to the GIF library which, after fixing the test runner, failed to produce any crashes but did result in a better test corpus, improving coverage above 90%.

I then turned my attention to the SVG processing library. This was different to the bitmap libraries in that it required parsing a much lower density text format and performing operations on the resulting tree representation.

The test program for the SVG library needed some improvement but is very basic in operation. It takes the test SVG, parses it using libsvgtiny and then uses the parsed output to write out an imagemagick mvg file.

The SVG processing uses the NetSurf DOM library which in turn uses an expat binding to parse the SVG XML text. To process this with AFL required instrumenting not only the SVG library but also the DOM library. I did not initially understand this and my first run resulted in a "map coverage" warning indicating an issue. Helpfully the AFL docs do cover this, so it was straightforward to rectify.

Once the test program was written and environment set up an AFL run was started and left to run. The next day I was somewhat alarmed to discover the fuzzer had made almost no progress and was running very slowly. I asked for help on the AFL mailing list and got a polite and helpful response, basically I needed to RTFM.

I must thank the members of the AFL mailing list for being so helpful and tolerating someone who ought to know better asking dumb questions.

After reading the fine manual I understood I needed to ensure all my test cases were as small as possible and further that the fuzzer needed a dictionary as a hint to the file format because the text file was of such low data density compared to binary formats.
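An AFL dictionary is just a file of name="token" entries which the fuzzer splices into its inputs; for SVG the entries would be along these lines (illustrative, not the actual file I used):
tag_svg="<svg"
tag_rect="<rect"
tag_path="<path"
attr_width="width="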

Rabbit of Caerbannog. Death awaits you with pointy teeth
I crafted an SVG dictionary based on the XML one, ensured all the seed SVG files were as small as possible and tried again. The immediate result was thousands of crashes, nothing like being savaged by a rabbit to cause a surprise.

Not being in possession of the appropriate holy hand grenade I resorted instead to GDB and Electric Fence. Unlike the bitmap library crashes, memory bounds issues simply did not feature here. Instead the crashes mainly centred around actual logic errors when constructing and traversing the data structures.

For example, Daniel Silverstone fixed an interesting bug where the XML parser binding would try and go "above" the root node in the tree if the source closed more tags than it opened, which resulted in wild pointers and NULL references.

I found and squashed several others, including dealing with SVG documents which have no valid root element, and division-by-zero errors when things like colour gradients have no points.

I find it interesting that the type and texture of the crashes completely changed between the SVG and binary formats. Perhaps it is just the nature of textual formats that causes this, although it might be due to the techniques used to parse the formats.

Once all the immediately reproducible crashes were dealt with I performed a longer run. I used my monster system as previously described and ran the fuzzer for a whole week.

Summary stats
=============

Fuzzers alive : 10
Total run time : 68 days, 7 hours
Total execs : 9268 million
Cumulative speed : 15698 execs/sec
Pending paths : 0 faves, 2501 total
Pending per fuzzer : 0 faves, 250 total (on average)
Crashes found : 9 locally unique

After burning almost seventy days of processor time, AFL found me another nine crashes and, possibly more importantly, a test corpus that generates over 90% coverage.

A useful tool that AFL provides is afl-cmin. This reduces the number of test files in a corpus to only those that are required to exercise all the code paths reached by the test set. In this case it reduced the number of files from 8242 to 2612.

afl-cmin -i queue_all/ -o queue_cmin -- test_decode_svg @@ 1.0 /dev/null
corpus minimization tool for afl-fuzz by

[+] OK, 1447 tuples recorded.
[*] Obtaining traces for input files in 'queue_all/'...
Processing file 8242/8242...
[*] Sorting trace sets (this may take a while)...
[+] Found 23812 unique tuples across 8242 files.
[*] Finding best candidates for each tuple...
Processing file 8242/8242...
[*] Sorting candidate list (be patient)...
[*] Processing candidates and writing output files...
Processing tuple 23812/23812...
[+] Narrowed down to 2612 files, saved in 'queue_cmin'.

Additionally the actual information within the test files can be minimised with the afl-tmin tool. This must be run on each file individually and can take a relatively long time. Fortunately with GNU parallel one can run many of these jobs simultaneously which merely required another three days of CPU time to process. The resulting test corpus weighs in at a svelte 15 Megabytes or so against the 25 Megabytes before minimisation.
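The run amounted to something like the following (the directory names follow on from the afl-cmin invocation above; the exact command line is a reconstruction):
mkdir -p queue_tmin
find queue_cmin -type f | parallel afl-tmin -i {} -o queue_tmin/{/} -- ./test_decode_svg @@ 1.0 /dev/null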

The result is yet another NetSurf library significantly improved by the use of AFL both from finding and squashing crashing bugs and from having a greatly improved test corpus to allow future library changes with a high confidence there will not be any regressions.

19 October 2016

Reproducible builds folks: Reproducible Builds: week 77 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday October 9 and Saturday October 15 2016:

Media coverage

Documentation update

After discussions with HW42, Steven Chamberlain, Vagrant Cascadian, Daniel Shahaf, Christopher Berg, Daniel Kahn Gillmor and others, Ximin Luo has started writing up more concrete and detailed design plans for setting SOURCE_ROOT_DIR for reproducible debugging symbols, buildinfo security semantics and buildinfo security infrastructure.

Toolchain development and fixes

Dmitry Shachnev noted that our patch for #831779 has been temporarily rejected by docutils upstream; we are trying to persuade them again. Tony Mancill uploaded javatools/0.59 to unstable containing an original patch by Chris Lamb. This fixed an issue where documentation Recommends: substvars would not be reproducible. Ximin Luo filed bug 77985 to GCC as a pre-requisite for future patches to make debugging symbols reproducible.

Packages reviewed and fixed, and bugs filed

The following updated packages have become reproducible - in our current test setup - after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

Some uploads have addressed some reproducibility issues, but not all of them:

Some uploads have addressed nearly all reproducibility issues, except for build path issues:

Patches submitted that have not made their way to the archive yet:

Reviews of unreproducible packages

101 package reviews have been added, 49 have been updated and 4 have been removed in this week, adding to our knowledge about identified issues. 3 issue types have been updated:

Weekly QA work

During reproducibility testing, some FTBFS bugs have been detected and reported by:

tests.reproducible-builds.org

Debian:

Openwrt/LEDE/NetBSD/coreboot/Fedora/archlinux:

Misc.

We are running a poll to find a good time for an IRC meeting. This week's edition was written by Ximin Luo, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.

14 October 2016

Daniel Silverstone: Gitano - Approaching Release - Changes

Continuing on from the previous article, here is a (probably incomplete) list of the critical changes to Gitano which have been, or will be, worked on during the run toward a 1.0 release. Each of these will have a blog posting to discuss what the changes mean for current and future users. Sometimes I'll aggregate postings, sometimes I won't. The following are some highlights from the past little while of development which has been undertaken by Richard and myself. Each item is, I feel, important enough to warrant commentary, even for those who already use Gitano. Any number of smaller things have been done which fall below some arbitrary barrier for telling you about. If you're aware of any of them and feel they are worthwhile telling the world about, then please prod me and I'll add an article to the series. Finally it's worth noting that the effort to get all this into Debian Stretch proceeds apace. Of the eight packages needed, at the time of posting: one was already in and has been updated (luxio), three have been accepted into Debian already (supple, clod, lua-scrypt), two are in NEW (gall and lace), and that leaves the newest library (tongue) and then Gitano itself still to go. The Debian FTP team have been awesome in helping me with all this, so thanks go to them.

6 October 2016

Reproducible builds folks: Reproducible Builds: week 75 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday September 25 and Saturday October 1 2016:

Statistics

For the first time, we reached 91% reproducible packages in our testing framework on testing/amd64 using a deterministic build path. (This is what we recommend to make packages in Stretch reproducible.) For unstable/amd64, where we additionally test for reproducibility across different build paths, we are at almost 76% again.

IRC meetings

We have a poll to set a time for a new regular IRC meeting. If you would like to attend, please input your available times and we will try to accommodate for you. There was a trial IRC meeting on Friday, 2016-09-31 1800 UTC. Unfortunately, we did not activate meetbot. Despite this, participants consider the meeting a success as several topics were discussed (e.g. changes to IRC notifications of tests.r-b.o) and the meeting stayed within one hour in length.

Upcoming events

Reproduce and Verify Filesystems - Vincent Batts, Red Hat - Berlin (Germany), 5th October, 14:30 - 15:20 @ LinuxCon + ContainerCon Europe 2016.

From Reproducible Debian builds to Reproducible OpenWrt, LEDE & coreboot - Holger "h01ger" Levsen and Alexander "lynxis" Couzens - Berlin (Germany), 13th October, 11:00 - 11:25 @ OpenWrt Summit 2016.

Introduction to Reproducible Builds - Vagrant Cascadian will be presenting at the SeaGL.org Conference in Seattle (USA), November 11th-12th, 2016.

Previous events

GHC Determinism - Bartosz Nitka, Facebook - Nara (Japan), 24th September, ICFP 2016.

Toolchain development and fixes

Michael Meskes uploaded bsdmainutils/9.0.11 to unstable with a fix for #830259 based on Reiner Herrmann's patch. This fixed the locale_dependent_symbol_order_by_lorder issue in the affected packages (freebsd-libs, mmh). devscripts/2.16.8 was uploaded to unstable. It includes a debrepro script by Antonio Terceiro which is similar in purpose to reprotest but more lightweight; specific to Debian packages and without support for virtual servers or configurable variations.

Packages reviewed and fixed, and bugs filed

The following updated packages have become reproducible in our testing framework after being fixed:

The following updated packages appear to be reproducible now for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Reviews of unreproducible packages

77 package reviews have been added, 178 have been updated and 80 have been removed in this week, adding to our knowledge about identified issues. 6 issue types have been updated:

Weekly QA work

As part of reproducibility testing, FTBFS bugs have been detected and reported by:

diffoscope development

A new version of diffoscope, 61, was uploaded to unstable by Chris Lamb. It included contributions from:

Post-release there were further contributions from:

reprotest development

A new version of reprotest, 0.3.2, was uploaded to unstable by Ximin Luo. It included contributions from:

Post-release there were further contributions from:

tests.reproducible-builds.org

Misc.

This week's edition was written by Ximin Luo, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.

4 October 2016

Daniel Silverstone: Gitano - Approaching Release - Work

I have been working quite hard, along with my friend and colleague Richard Maw, on getting Gitano ready for a release suitable for inclusion into Debian Stretch. You can see how we're doing on the various Trello boards for:

As Richard and I work toward a version of Gitano we're prepared to support long-term in Debian, we are making many changes to make our lives easier. For those of you who have been using Gitano over the past few years, you'll need to pay attention to some postings which will be coming soon about how to make the changes you need so as to not explode horribly when you upgrade to the version we're releasing soon. For those of you who are not yet using Gitano but feel like you might want to; I'll also be producing some postings about getting started with the packages. And for those happily running current HEAD of Gitano already, I'll be posting about some of the new features over the next little while in case you're not aware of them.
IMPORTANT: If you're using Gitano already and have any issues or feature requests then please please please let me know ASAP, otherwise they're unlikely to be resolved/implemented before 1.0. irl already asked for the facility to verify GPG signed commits and tags, but if you want anything else considered then I need to know v. soon. (Ideally email me, but you may comment on this posting too if you must.)

1 October 2016

Vincent Sanders: Paul Hollywood and the pistoris stone

There has been a great deal of comment among my friends recently about a particularly British cookery program called "The Great British Bake Off". There has been some controversy as the program is moving from the BBC to a commercial broadcaster.

Part of this discussion comes from all the presenters, excepting Paul Hollywood, declining to sign with the new broadcaster, and part from speculation that the BBC might continue with a similar format show under a new name.

Rob Kendrick provided the start to this conversation by passing on a satirical link suggesting Samuel L Jackson might host "cakes on a plane".

This caused a large number of suggestions for alternate names which I will be reporting, but Rob Kendrick, Vivek Das Mohapatra, Colin Watson, Jonathan McDowell, Oki Kuma, Dan Alderman, Dagfinn Ilmari Mannsåker, Lesley Mitchell and Daniel Silverstone are the ones to blame.




So that is our list, anyone else got better ideas?

23 June 2016

Jonathan McDowell: Fixing missing text in Firefox

Every now and again I get this problem where Firefox won't render text correctly (on a Debian/stretch system). Most websites are fine, but the odd site just shows up with blanks where the text should be. Initially I thought it was NoScript, but turning that off didn't help. Daniel Silverstone gave me a pointer today that the pages in question were using webfonts, and that provided enough information to dig deeper. The sites in question were using Cantarell, via:
src: local('Cantarell Regular'), local('Cantarell-Regular'), url(cantarell.woff2) format('woff2'), url(cantarell.woff) format('woff');
The Firefox web dev inspector didn't show it trying to fetch the font remotely, so I removed the local() elements from the CSS. That fixed the page, letting me pinpoint the problem as a local font issue. I have fonts-cantarell installed so at first I tried to remove it, but that breaks gnome-core. So instead I did an fc-list | grep -i cant to ask fontconfig what it thought was happening. That gave:
/usr/share/fonts/opentype/cantarell/Cantarell-Regular.otf.dpkg-tmp: Cantarell:style=Regular
/usr/share/fonts/opentype/cantarell/Cantarell-Bold.otf.dpkg-tmp: Cantarell:style=Bold
/usr/share/fonts/opentype/cantarell/Cantarell-Bold.otf: Cantarell:style=Bold
/usr/share/fonts/opentype/cantarell/Cantarell-Oblique.otf: Cantarell:style=Oblique
/usr/share/fonts/opentype/cantarell/Cantarell-Regular.otf: Cantarell:style=Regular
/usr/share/fonts/opentype/cantarell/Cantarell-Bold-Oblique.otf: Cantarell:style=Bold-Oblique
/usr/share/fonts/opentype/cantarell/Cantarell-Oblique.otf.dpkg-tmp: Cantarell:style=Oblique
/usr/share/fonts/opentype/cantarell/Cantarell-BoldOblique.otf: Cantarell:style=BoldOblique
Hmmm. Those .dpkg-tmp files looked odd, and sure enough they didn't actually exist. So I did a sudo fc-cache -f -v to force a rebuild of the font cache and restarted Firefox (it didn't seem to work before doing so) and everything works fine now. It seems that fc-cache must have been run at some point when dpkg had not yet completed installing an update to the fonts-cantarell package. That seems like a bug - fontconfig should probably ignore .dpkg* files, but equally I wouldn't expect it to be run before dpkg had finished its unpacking stage fully.
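In other words, the diagnosis and fix boiled down to two commands:
fc-list | grep -i cant
sudo fc-cache -f -v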

29 February 2016

Daniel Silverstone: Kicad hacking - Intra-sheet links and ERC

This is a bit of an odd posting since it's about something I've done but is also here to help me explain why I did it and thus perhaps encourage some discussion around the topic within the Kicad community...
Recently (as you will know if you follow this blog anywhere it is syndicated) I have started playing with Kicad for the development of some hardware projects I've had a desire for. In addition, some of you may be aware that I used to work for a hardware/software consultancy called Simtec, and there I got to play for a while with an EDA tool called Mentor Designview. Mentor was an expensive, slow, clunky, old-school EDA tool, but I grew to understand and like the workflow. I spent time looking at gEDA and Eagle when I wanted to get back into hardware hacking for my own ends, but neither really clicked with me. On the other hand, a mere 10 minutes with Kicad and I knew I had found the tool I wanted to work with long-term. I designed the beer'o'meter project (a flow meter for the pub we are somehow intimately involved with) and then started on my first personal surface-mount project -- SamDAC, which is a DAC designed to work with our HiFi in our study at home.

As I worked on the SamDAC project, I realised that I was missing a very particular thing from Mentor, something which I had low-level been annoyed by while looking at other EDA tools -- Kicad lacks a mechanism to mark a wire as being linked to somewhere else on the same sheet. Almost all of the EDA tools I've looked at seem to lack this nicety, and honestly I miss it greatly, so I figured it was time to see if I could successfully hack on Kicad.

Kicad is written in C++, and it has been mumble mumble years since I last did any C++, either for personal hacking or professionally, so it took a little while for that part of my brain to kick back in enough for me to grok the codebase. Kicad is not a small project, taking around ten minutes to build on my not-inconsiderable computer. And while it beavered away building, I spent time looking around the source code, particularly the schematic editor eeschema.

To skip ahead a bit, after a couple of days of hacking around, I had a proof-of-concept for the intra-sheet links which I had been missing from my days with Mentor, and some ERC (electrical rules checking) to go alongside that to help produce schematics without unwanted "sharp corners". In total, I added:

I forked the Kicad mirror on Github and pushed my own branch with this work to my Kicad fork. All of this is meant to allow schematic capture engineers to more clearly state their intentions regarding what they are drawing. The intra-sheet link could be thought of like a no-connect element, except instead of saying "this explicitly goes nowhere" we're saying "this explicitly goes somewhere else on this sheet, you can go look for it". Obviously, people who dislike (or simply don't want to use) such intra-sheet link elements can just disable that ERC tickybox and not be bothered by them in the least (well, except for the toolbar button and menu item I suppose).

Whether this work gets accepted into Kicad, or festers and dies on the vine, it was good fun developing it and I'd like to illustrate how it could help you, and why I wrote it in the first place:

A contrived story
Note, while this story is meant to be taken seriously, it is somewhat contrived, the examples are likely electrical madness, but please just think about the purpose of the checks etc.
To help to illustrate the feature and why it exists, I'd like to tell you a somewhat contrived story about Fred. Fred is a schematic capture engineer and his main job is to review schematics generated by his colleagues. Fred and his colleagues work with Kicad (hurrah) but of late they've been having a few issues with being able to cleanly review schematics. Fred's colleagues are not the neatest of engineers. In particular they tend to be quite lazy when it comes to running busses, which are not (for example) address and data busses, around their designs, and they tend to simply have wires which end in mid-space and pick up somewhere else on the sheet. All this is perfectly reasonable of course, and Kicad handles it with aplomb. Sadly it seems quite error prone for Fred's workplace.

As an example, Fred's colleague Ben has been designing the power supply for a particular board. As with most power supplies, plenty of capacitors are needed to stabilise the regulators and smooth the output. In the example below, the intent is that all of the capacitors are on the FOO net.

Contrived problem example 1

Sadly there's a missing junction and/or slightly misplaced label in the upper section which means that C2 and C3 simply don't join to the FOO net. This could easily be missed, but the ERC can't spot it at all since there's more than one thing on each net, so the pins of the capacitors are connected to something. Fred is very sad that this kind of problem can sometimes escape notice by the schematic designer Ben, Fred himself, and the layout engineer, resulting in boards which simply do not work. Fred takes it upon himself to request that the strict wiring checks ERC is made mandatory for all designs, and that the design engineers be required to use intra-sheet link symbols when they have signals which wander off to other parts of the sheet like FOO does in the example. Without any further schematic changes, strict wiring checks enabled gives the following points of ERC concern for Ben to think about:

Contrived problem example 2

As you can see, the ERC is pointing at the wire ends and the warnings are simply that the wires are dangling and that this is not acceptable. This warning is very like the pin-not-connected warnings which can be silenced with an explicit no-connect schematic element. Ben, being a well behaved and gentle soul, obeys the design edicts from Fred and seeks out the intra-sheet link symbols, clearing off the ERC markers and then adding intra-sheet links to his design:

Contrived problem example 3

This silences the dangling-end ERC check, which is good; however it results in another ERC warning:

Contrived problem example 4

This time, the warning for Ben to deal with is that the intra-sheet links are pointless. Each exists without a companion to link to because of the net name hiccough in the top section. It takes Ben a moment to realise that the mistake which has been made is that a junction is missing in the top section. He adds the junction and bingo, the ERC is clean once more:

Contrived problem example 5

Now, this might not seem like much gain for so much effort, but Ben can now be more confident that the FOO net is properly linked across his design, and Fred can know, when he looks at the top part of the design, that Ben intended for the FOO net to go somewhere else on the sheet and he can look for it.

Why do this at all?

Okay, dropping out of our story now, let's discuss why these ERC checks are worthwhile and why the intra-sheet link schematic element is needed.
Note: This bit is here to remind me of why I did the work, and to hopefully explain a little more about why I think it's worth adding to Kicad...
Designers are (one assumes) human beings. As humans we (and I count myself here too) are prone to mistakes. Sadly mistakes are often subtle and could easily be thought of as deliberate if the right thought processes are not followed carefully when reviewing. Anyone who has ever done code review, proofread a document, or performed any such activity, will be quite familiar with the problems which can be introduced by a syntactically and semantically valid construct which simply turns out to be wrong in the greater context.

When drawing designs, I often end up with bits of wire sticking out of schematic sections which are not yet complete. Sadly if I sleep between design sessions, I often lose track of whether such a dangling wire is meant to be attached to more stuff, or is simply left because the net is picked up elsewhere on the sheet. With intra-sheet link elements available, if I had intended the latter, I'd have just dropped such an element on the end of the wire before I stopped for the day. Also, when drawing designs, I sometimes forget to label a wire, especially if it has just passed through a filter or current-limiting resistor or similar. As such, even with intra-sheet link elements to show me when I mean for a net to go bimbling off across the sheet, I can sometimes end up with unnamed nets whose capacitors end up not used for anything useful.

This is where the ERC comes in. By having the ERC complain if a wire dangles, the design engineer won't forget to add links (or will check more explicitly whether the wire is meant to be attached to something else). By having junctions which don't actually link anything warned about, the engineer can't just slap a junction blob down on the end of a wire to silence that warning, since that doesn't mean anything to a reviewer later down the line. By having the ERC warn if a net has exactly one intra-sheet link attached to it, missing net names and errors such as that shown in my contrived example above can be spotted and corrected.

Ultimately this entire piece of work is about ensuring that the intent of the design engineer is captured clearly in the schematic. If the design engineer meant to leave that wire dangling because it's joining to another bit of wire elsewhere on the sheet, they can put the intra-sheet links in to show this. The associated ERC checks are there purely to ensure that the validation of this intent is not bypassed accidentally, or deliberately, in order to make the use of this more worthwhile and to increase the usefulness of the ERC on designs where signals jump around on sheets where wiring them up directly would just create a mess.

31 January 2016

Daniel Silverstone: The Beer'o'Meter project

As some of you may know, I have been working on a small hardware project called the Beer'o'Meter whose purpose is to allow us to extend Ye Olde Vic's beer board to indicate the approximate fullness of each cask. For some time now, we've been operating an electronic beer board at the Vic which you may see tweeted out from time to time. The pumpotron has become very popular with the visitors to the pub, especially as it can be viewed online in a basic textual form.

Of course, as many of you who visit pubs know only too well, that a beer is "on" is no indication of whether or not you need to get there sharpish to have a pint, or if you can take your time and have a curry first. As a result, some of us have noticed a particular beer on, come to the pub after dinner, and then been very sad that if only we'd come 30 minutes previously, we'd have had a chance at the very beer we were excited about. Combine this kind of sadness with a two week break at Christmas, and I started to develop a Beer'o'Meter to extend the pumpotron with an indication of how much of a given beer had already been served.

Recently my boards came back from Elecrow along with various bits and bobs, and I have spent some time today building one up for test purposes. As always, it's important to start with some prep work to collect all the necessary components. I like to use cake cases, as you may have noticed on the posting yesterday about the oscilloscope I built.

Component prep for the Beer'o'Meter

Naturally, after prep comes the various stages of assembly. You start with the lowest-height components, so here's the board after I fitted the ceramic capacitors:

Step 1, ceramic capacitors

And here's after I fitted the lying-down electrolytic decoupling capacitor for the 3.3 volt line:

Step 2, capacitors which lie down

Next I should have fitted the six transistors from the middle cake case, but I discovered that I'd used the wrong pinout for them. Even after weeks of verification by myself and others, I'd made a mistake. My good friend Vincent Sanders recently posted about how creativity is allowing yourself to make mistakes, and here I had made a doozy I hadn't spotted until I tried to assemble the board. Fortunately TO-92 transistors have nice long legs and I have a pair of tweezers and some electrical tape. As such I soon had six transistors doing the river dance:

Transistors doing the river-dance

With that done, I noticed that the transistors now stood taller than the pins (previously I had been intending to fit the transistors before the pins) so I had to shuffle things around and fit all my 0.1" pins and sockets next:

Step 3, pins and sockets

Then I could fit my dancing transistors:

Step 4, transistors

We're almost finished now, just one more capacitor to provide some input decoupling on the 9v power supply:

Finished -- decoupling fitted

Of course, it wouldn't be complete without the ESP8266 Huzzah I acquired from AdaFruit, though I have to say that I'm unlikely to use these again; rather I might design in the surface-mount version of the module instead.

Fitted with the module

And since this is the very first Beer'o'Meter to be made, I had to go and put a 1 on the serial-number space on the back of the board. I then tried to sign my name in the box, made a hash of it, so scribbled in the gap :-)

The back of the finished module

Finally I got to fit all six of my flow meters ready for some testing.
I may post again about testing the unit, but for now, here's a big spider of a flow meter for beer:

The Beer'o'Meter spider

This has been quite a learning experience for me, and I hope in the future to be able to share more of my hardware projects, perhaps from an earlier stage. I have plans for a DAC board, and perhaps some other things.

Daniel Silverstone: Building an Oscilloscope

I recently ordered some PCBs from Elecrow for the Vic's beer-measurement system I've been designing with Rob. While on the site, I noticed that they have a single-channel digital oscilloscope kit based on an STM32. This is a JYE Tech DSO138 which arrives as a PCB whose surface-mount stuff has been fitted, along with a whole bunch of pin-through components for you to solder up the scope yourself.

There's a non-trivial number of kinds of components, so first you should prep by splitting them all up and double-checking them all.

Preparing the components

Once you've done that, the instructions start you off fitting a whole bunch of resistors...

Step 1, fitting resistors

Then some diodes, RF chokes, and the 8MHz crystal for the STM32.

Step 2, fitting diodes, a crystal, and chokes

The single most-difficult bit for me to solder was the USB socket. Fine pitch leads, coupled with a high-thermal-density socket.

Step 3, the USB socket

There is a veritable mountain of ceramic capacitors to fit...

Step 4, all the ceramic capacitors

And then buttons, inductors, trimming capacitors and much more...

Step 5, buttons, inductors, trimming capacitors, etc

The switches were the next hardest things to solder, after the USB socket...

Step 6, Switches, connectors, etc

Finally you have to solder a test loop and close some jumpers before you power-test the board.

Step 7, Test loop and jumper soldering

The last bit of soldering is to solder pins to the LCD panel board...

Step 8, LCD panel

Before you finally have a working oscilloscope:

Working oscilloscope!

I followed the included instructions to trim the scope using the test point and the trimming capacitors, before having a break to write this up for you all. I'd say that it was a fun day because I enjoyed getting a lot of soldering practice (before I have to solder up the beer'o'meter for the pub) and at the end of it I got a working oscilloscope. For 40 USD, I'd recommend this to anyone who fancies a go.

8 November 2015

Daniel Pocock: Problems observed during Cambridge mini-DebConf RTC demo

A few problems were observed during the demo of RTC services at the Cambridge mini-DebConf yesterday. As it turns out, many of them are already documented and solutions are available for some of them.

Multiple concurrent SIP registrations

I had made some test calls on Friday using rtc.debian.org and I still had the site open in another tab in another browser window. When people tried to call me during the demo, both tabs were actually ringing but only one was visible. When a SIP client registers, the SIP registration server sends it a list of all other concurrent registrations in the response message. We simply need to extend JSCommunicator to inspect the response message and give some visual feedback about other concurrent registrations (Issue #69). SIP also provides a mechanism to clear concurrent registrations, and that could be made available with a button or configuration option too (Issue #9).

Callee hears ringing before connectivity checks completed

The second issue during the WebRTC demo was that the callee (myself) was alerted about the call before the ICE checks had been performed. The optimal procedure to provide a slick user experience is to run the connectivity checks before alerting the callee. If the connectivity checks fail, the callee should never be alerted with a ringing sound and would never know somebody had tried to call. The caller would be told that the call was unable to be attempted and encouraged to consider trying again on another wifi network. RFC 5245 recommends that connectivity checks should be done first, but it is not mandatory. One reason this is problematic with WebRTC is the need to display the pop-up asking the user for permission to share their microphone and webcam: the popup must appear before connectivity checks can commence. This has been discussed in the JsSIP issue tracker. Non-WebRTC softphones, such as Lumicall, do the connectivity checks before alerting the callee.

Dealing with UDP blocking

It appears the corporate wifi network in the venue was blocking the UDP packets so the connectivity checks could never complete, not even using a TURN server to relay the packets. People trying to use the service on home wifi networks, in small offices and mobile tethering should not have this problem as these services generally permit UDP by default. Some corporate networks, student accommodation and wifi networks in some larger hotels have blocked UDP, and in these cases additional effort must be made to get through the firewall. The TURN server we are running for rtc.debian.org also supports a TLS transport, but it simply isn't configured yet. At the time we originally launched the WebRTC service in 2013, the browsers didn't support TURN over TLS at all, but now they do. This is probably the biggest problem encountered during the demo, but it does not require any code change to resolve, just configuration, so a solution is well within reach. During the demo, we worked around the issue by turning off the wifi on my laptop and using tethering with a 4G mobile network. All the calls made successfully during the demo used the tethering solution.

Add a connectivity check timeout

The ICE connectivity checks appeared to keep running for a long time. Usually, if UDP is not blocked, the ICE checks would complete in less than two seconds. Therefore, the JavaScript needs to set a timeout between two and five seconds when it starts the checks and give the user a helpful warning about their network problems if the timeout is exceeded (Issue #73 in JSCommunicator).
While these lengthy connectivity checks appear disappointing, it is worth remembering that this is an improvement over the first generation of softphones: none of them made these checks at all, they would simply tell the user the call had been answered but audio and video would only be working in one direction or not at all.

Microphone issues

One of the users calling into the demo, Juliana, was visible on the screen but we couldn't hear her. This was a local audio hardware issue with her laptop or headset. It would be useful if the JavaScript could provide visual feedback when it detects a voice (issue #74) and, even better, integrate with the sound settings so that the user can see if the microphone is muted or the gain is very low (issue #75).

Thanks to participants in the demo

I'd like to thank all the participants in the demo, including Juliana Louback who called us from New York, Laura Arjona who called us from Madrid, Daniel Silverstone who called from about three meters away in the front row, and Iain Learmonth who helped co-ordinate the test calls over IRC. Thanks are also due to Steve McIntyre, the local Debian community, ARM and the other sponsors for making another mini-DebConf in the UK this year.

7 November 2015

Andrew Cater: Debian miniconf, Cambridge - ARM, 1430 7 November

A couple of good quick lightning talks from Dimitri Ledkov, Daniel Silverstone and Phil Hands.

A good talk on mass-deployment of Debian for Durham University's workstations. Cue a small huddle of people talking similar topics and exchanging useful information

Lars Wirzenius is now talking about how to speed up Obnam, his backup program.

Auditorium in silence - some of us heads down, hunched over laptops, but laptop keyboards are silent these days.

Andrew Cater: Mini Debconf ARM Cambridge 1115 7 November

Between talks - coding and testing going on in the row in front of me - vmdebootstrap rework for the Debian Live CD still going on.

Beaglebone SBC hanging from a Thinkpad in the row behind

Others talking - most getting coffee

Next up - Daniel Silverstone on Lua

4 November 2015

Vincent Sanders: I am not a number I am a free man

Once more the NetSurf developers tried to escape from a mysterious village by writing web browser code.

Michael Drake, Daniel Silverstone, Dave Higton and Vincent Sanders at NetSurf Developer workshop
The sixth developer workshop was an opportunity for us to gather together in person to contribute to NetSurf.

We were hosted by Codethink in their Manchester offices which provided a comfortable and pleasant space to work in.

Four developers managed to attend in person from around the UK: Michael Drake, Daniel Silverstone, Dave Higton and Vincent Sanders.

The main focus of the weekend's activities was to work on improving our JavaScript implementation. At the previous workshop we had laid the groundwork for a shift to the Duktape JavaScript engine and since then put several hundred hours of time into completing this transition.

During this weekend Daniel built upon this previous work and managed to get DOM events working. This was a major missing piece of implementation which will mean NetSurf will be capable of interpreting JavaScript based web content in a more complete fashion. This work revealed several issues with our DOM library which were also resolved.

We were also able to merge several improvements provided by the Duktape upstream maintainer Sami Vaarala, which addressed performance problems with regular expressions that were causing reports of "hangs" on slow processors.

The responsiveness of Sami and the Duktape project has been a pleasant surprise, making our switch to the library look like an increasingly worthwhile effort.

Overall some good solid progress was made on JavaScript support. Around half of the DOM interfaces in the specifications have now been implemented, leaving around fifteen hundred methods and properties remaining. There is an aim to have this under the thousand mark before the new year, which should result in a generally useful implementation of the basic interfaces.

Once the DOM interfaces have been addressed our focus will move onto the dynamic layout engine necessary to allow rendering of the changing content.

The 3.4 release is proposed to occur sometime early in the new year and depends on getting the JavaScript work to a suitably stable state.

Dave joined us for the first time; he was principally concerned with dealing with bugs and the bug tracker. It was agreeable to have a new face at the meeting and some enthusiasm for the RISC OS port, which has been lacking an active maintainer for some time.

The turnout for this workshop was the same as the previous one, and the issues raised then are still true: we still have a very small active core team who can commit only limited time, which is making progress very slow, and we are lacking significant maintenance for several frontends.

Overall we managed to pack 16 hours of work into the weekend and addressed several significant problems.
